Prompt-based All-in-One Image Restoration using CNNs and Transformer
Image restoration aims to recover high-quality images from their degraded
observations. Since most existing methods are dedicated to removing a single
type of degradation, they may not yield optimal results on other degradation
types, which limits their use in real-world scenarios. In this paper, we
propose a novel data ingredient-oriented approach that leverages prompt-based
learning to enable a single model to efficiently tackle multiple image
degradation tasks. Specifically, we utilize an encoder to capture features and
introduce prompts carrying degradation-specific information to guide the
decoder in adaptively recovering images affected by various degradations. To
model both the local invariant properties and the non-local information needed
for high-quality image restoration, we combine CNN operations with
Transformers. We also introduce several key designs in the Transformer blocks
(multi-head rearranged attention with prompts and a simple-gate feed-forward
network) to reduce computational requirements and to selectively determine
what information should be preserved, facilitating efficient recovery of
potentially sharp images. Furthermore, we incorporate a feature fusion
mechanism that further exploits multi-scale information to improve the
aggregated features. Despite being designed to handle different types of
degradations, the resulting tightly interlinked hierarchical architecture,
named CAPTNet, performs competitively with task-specific algorithms, as
extensive experiments demonstrate.
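The simple-gate feed-forward idea mentioned above can be illustrated with a
minimal NumPy sketch. This follows the common channel-splitting gate design
(as popularized by NAFNet); the exact CAPTNet block layout is not described
here, and all function names and array shapes are assumptions for
illustration:

```python
import numpy as np

def simple_gate(x):
    # Split the channel dimension in half and multiply elementwise:
    # a lightweight gate that replaces a nonlinear activation and
    # lets the network decide which features to preserve.
    a, b = np.split(x, 2, axis=-1)
    return a * b

def simple_gate_ffn(x, w_in, w_out):
    # Expand features, gate them, and project back, with a residual add.
    h = x @ w_in           # (n, d) -> (n, 2 * d_hidden)
    h = simple_gate(h)     # (n, d_hidden)
    return x + h @ w_out   # back to (n, d)
```

Because the gate halves the channel count, the expansion weight `w_in` must
produce twice the hidden width, which keeps the parameter cost comparable to
an activation-based feed-forward block.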
SU-Net: Pose estimation network for non-cooperative spacecraft on-orbit
Spacecraft pose estimation plays a vital role in many on-orbit space
missions, such as rendezvous and docking, debris removal, and on-orbit
maintenance. Because space images exhibit widely varying lighting conditions,
high contrast, and low resolution, pose estimation of space objects is more
challenging than that of objects on Earth. In this paper, we analyze the
radar image characteristics of spacecraft on-orbit, then propose a new deep
learning network structure named Dense Residual U-shaped Network (DR-U-Net)
to extract image features. We further introduce a novel neural network based
on DR-U-Net, namely Spacecraft U-shaped Network (SU-Net), to achieve
end-to-end pose estimation for non-cooperative spacecraft. Specifically,
SU-Net first preprocesses the image of the non-cooperative spacecraft, and
transfer learning is then used for pre-training. Subsequently, to address
radar image blur and the low distinguishability of spacecraft contours, we
add residual connections and dense connections to the U-Net backbone,
yielding DR-U-Net. In this way, feature loss and model complexity are
reduced, and the degradation of deep neural networks during training is
avoided. Finally, a feedforward neural network layer performs pose estimation
of the non-cooperative spacecraft on-orbit.
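A feedforward pose-regression layer of this kind can be sketched as a small
head mapping extracted features to pose parameters. The output
parameterization below (a unit quaternion for rotation plus a 3-vector for
translation) and all weight shapes are hypothetical choices for illustration,
not the paper's stated design:

```python
import numpy as np

def pose_head(features, w1, b1, w2, b2):
    # Hidden layer followed by a linear map to 7 values:
    # 4 quaternion components (rotation) + 3 translation components.
    h = np.tanh(features @ w1 + b1)
    out = h @ w2 + b2
    q, t = out[:4], out[4:]
    q = q / np.linalg.norm(q)  # project onto the unit sphere so q is a valid rotation
    return q, t
```

Normalizing the quaternion after the linear layer is a common trick: the
network can regress unconstrained values while the output remains a valid
rotation.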
Experiments show that the proposed method does not rely on hand-crafted
object-specific features, that the model is highly robust, and that its
accuracy outperforms state-of-the-art pose estimation methods.
The absolute error ranges from 0.1557 to 0.4491, the mean error is about
0.302, and the standard deviation is about 0.065.
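The residual-plus-dense connectivity that motivates DR-U-Net can be sketched
as follows. The `conv_like` stand-in (a per-position linear map with ReLU),
the layer count, and all shapes are illustrative assumptions, not the paper's
implementation; the point is how dense concatenation and a final residual add
combine:

```python
import numpy as np

def conv_like(x, w):
    # Stand-in for a convolutional layer: per-position linear map + ReLU.
    return np.maximum(x @ w, 0.0)

def dense_residual_block(x, weights):
    # Dense connectivity: each layer consumes the concatenation of all
    # previous feature maps, limiting feature loss. The final residual
    # add eases optimization and counters the degradation of deep
    # networks during training.
    feats = [x]
    for w in weights:
        inp = np.concatenate(feats, axis=-1)
        feats.append(conv_like(inp, w))
    return x + feats[-1]  # residual connection; output shape matches the input
```

Each successive weight matrix must accept a wider input (the growing
concatenation), while the last layer maps back to the input width so the
residual add is well defined.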